Stable Prediction with Model Misspecification and Agnostic Distribution Shift
Authors
Abstract
Similar Resources
Informational herding with model misspecification
This paper demonstrates that a misspecified model of information processing interferes with long-run learning and allows inefficient choices to persist in the face of contradictory public information. I consider an observational learning environment where agents observe a private signal about a hidden state, and some agents observe the actions of their predecessors. Prior actions aggregate mult...
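The abstract describes the classic sequential-learning setup in which herding can arise. As a minimal illustrative sketch (a simplified Bikhchandani–Hirshleifer–Welch-style simulation, not the paper's misspecified model; all names and parameters are assumptions), each agent acts on the sum of the public log-odds implied by predecessors' actions and a private signal:

```python
import math
import random

def herding_sim(n_agents=50, signal_accuracy=0.7, seed=0):
    """Sequential binary choice: each agent combines a private signal with
    the public log-odds implied by predecessors' observed actions."""
    rng = random.Random(seed)
    state = rng.randint(0, 1)                      # hidden state in {0, 1}
    step = math.log(signal_accuracy / (1 - signal_accuracy))
    public = 0.0                                   # public log-odds of state 1
    actions = []
    for _ in range(n_agents):
        signal = state if rng.random() < signal_accuracy else 1 - state
        total = public + (step if signal == 1 else -step)
        action = signal if total == 0 else int(total > 0)
        actions.append(action)
        # An action reveals the signal only while the public evidence does
        # not yet outweigh a single signal; once it does, a cascade starts
        # and subsequent actions carry no new information.
        if abs(public) <= step:
            public += step if action == 1 else -step
    return state, actions

state, actions = herding_sim()
print("hidden state:", state, "| share choosing 1:", sum(actions) / len(actions))
```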
Distribution-Specific Agnostic Boosting
We consider the problem of boosting the accuracy of weak learning algorithms in the agnostic learning framework of Haussler (1992) and Kearns et al. (1992). Known algorithms for this problem (Ben-David et al., 2001; Gavinsky, 2002; Kalai et al., 2008) follow the same strategy as boosting algorithms in the PAC model: the weak learner is executed on the same target function but over different dis...
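As a rough illustration of that shared strategy, reweighting the distribution over the same labeled examples between calls to the weak learner, here is a minimal AdaBoost-style sketch; it is not any of the cited algorithms, and `stump_learner` is a hypothetical weak learner for 1-D data:

```python
import numpy as np

def boost_by_reweighting(X, y, weak_learner, rounds=10):
    """AdaBoost-style booster: run the weak learner on the same labels but
    under a reweighted distribution each round, upweighting mistakes."""
    n = len(y)
    w = np.full(n, 1.0 / n)                       # distribution over examples
    hypotheses, alphas = [], []
    for _ in range(rounds):
        h = weak_learner(X, y, w)
        pred = h(X)                               # predictions in {-1, +1}
        err = np.clip(w[pred != y].sum(), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        w = w * np.exp(-alpha * y * pred)         # reweight: mistakes grow
        w /= w.sum()
        hypotheses.append(h)
        alphas.append(alpha)
    return lambda X_: np.sign(sum(a * h(X_) for a, h in zip(alphas, hypotheses)))

def stump_learner(X, y, w):
    """Hypothetical weak learner: best weighted threshold stump on 1-D data."""
    best = (np.inf, 0.0, 1)
    for t in np.unique(X):
        for s in (1, -1):
            err = w[np.where(X > t, s, -s) != y].sum()
            if err < best[0]:
                best = (err, t, s)
    _, t, s = best
    return lambda X_, t=t, s=s: np.where(X_ > t, s, -s)

X = np.array([-2.0, -1.0, -0.5, 0.5, 1.0, 2.0])
y = np.array([-1, -1, 1, 1, -1, 1])               # not separable by one stump
strong = boost_by_reweighting(X, y, stump_learner, rounds=20)
print("training accuracy:", (strong(X) == y).mean())
```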
Nothing Else Matters: Model-Agnostic Explanations By Identifying Prediction Invariance
At the core of interpretable machine learning is the question of whether humans are able to make accurate predictions about a model’s behavior. Assumed in this question are three properties of the interpretable output: coverage, precision, and effort. Coverage refers to how often humans think they can predict the model’s behavior, precision to how accurate humans are in those predictions, and e...
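A minimal sketch of how coverage and precision could be scored from such a human study; the arrays below are hypothetical data invented for illustration, not from the paper:

```python
import numpy as np

# Hypothetical study records, one entry per test instance: did the person
# attempt to predict the model's behavior, and was that prediction right?
attempted = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 1], dtype=bool)
correct   = np.array([1, 0, 0, 1, 0, 1, 1, 0, 0, 1], dtype=bool)

coverage = attempted.mean()            # how often humans think they can predict
precision = correct[attempted].mean()  # how accurate those attempts are
print(f"coverage={coverage:.2f}, precision={precision:.2f}")
```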
Robust control and model misspecification
A decision maker fears that data are generated by a statistical perturbation of an approximating model that is either a controlled diffusion or a controlled measure over continuous functions of time. A perturbation is constrained in terms of its relative entropy. Several different two-player zero-sum games that yield robust decision rules and are related to one another, to the max-min expected ...
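Schematically, the penalized (multiplier) version of such a game takes the familiar robust-control form below; the notation is generic and assumed, not quoted from the paper:

```latex
% Multiplier robust control problem: the adversary chooses a perturbation h
% of the approximating model, penalized by its relative entropy R(h); the
% parameter theta > 0 indexes the fear of misspecification. The constraint
% game instead imposes R(h) <= eta directly.
\max_{c}\;\min_{h}\;
\mathbb{E}^{h}\!\left[\int_{0}^{\infty} e^{-\delta t}\, U(c_t)\, dt\right]
+ \theta\,\mathcal{R}(h)
```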
Agnostic Distribution Learning via Compression
We prove that Θ̃(kd²/ε²) samples are necessary and sufficient for learning a mixture of k Gaussians in R^d, up to error ε in total variation distance. This improves both the known upper bound and lower bound for this problem. For mixtures of axis-aligned Gaussians, we show that Õ(kd/ε²) samples suffice, matching a known lower bound. Moreover, these results hold in an agnostic learning setting as ...
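"Agnostic" here means the learner competes with the best mixture in the class even when the samples come from an arbitrary distribution; schematically (a generic statement with an unspecified universal constant C, not the paper's exact theorem), with F_k the class of mixtures of k Gaussians:

```latex
% Agnostic guarantee in total variation: the output \hat{f} is nearly as
% close to the true density f as the best mixture in the class F_k.
d_{\mathrm{TV}}\!\left(\hat{f}, f\right)
\;\le\; C \cdot \min_{g \in \mathcal{F}_k} d_{\mathrm{TV}}(g, f) + \varepsilon
```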
Journal
عنوان ژورنال: Proceedings of the AAAI Conference on Artificial Intelligence
سال: 2020
ISSN: 2374-3468, 2159-5399
DOI: 10.1609/aaai.v34i04.5876